Reconstructing 3D hand meshes from monocular RGB images has attracted increasing attention due to its vast potential applications in AR/VR. Most state-of-the-art methods attempt to solve this task in an anonymous manner. Specifically, they ignore the identity of the subject, even though it is practically available in real applications where the user does not change across a recording session. In this paper, we propose an identity-aware hand mesh estimation model, which incorporates identity information represented by the subject's intrinsic shape parameters. We demonstrate the importance of identity information by comparing the proposed identity-aware model with a baseline that treats subjects anonymously. Furthermore, to handle the use case of unseen test subjects, we propose a novel personalization pipeline to calibrate the intrinsic shape parameters, using only a few unlabeled RGB images of the subject. Experiments on two large-scale public datasets validate the state-of-the-art performance of our proposed method.
Recently, vision transformers and their variants have played an increasingly important role in both monocular and multi-view human pose estimation. Treating image patches as tokens, transformers can model global dependencies within the whole image or across images from other views. However, global attention is computationally expensive. As a consequence, it is hard to scale these transformer-based methods to high-resolution features and many views. In this paper, we propose the token-Pruned Pose Transformer (PPT) for 2D human pose estimation, which locates a rough human mask and performs self-attention only within the selected tokens. Furthermore, we extend PPT to multi-view human pose estimation. Built upon PPT, we propose a new cross-view fusion strategy, called human-area fusion, which treats all human foreground pixels as corresponding candidates. Experimental results on COCO and MPII demonstrate that our PPT can match the accuracy of previous pose transformer methods while reducing computation. Moreover, experiments on Human 3.6M and Ski-Pose demonstrate that our multi-view PPT can efficiently fuse cues from multiple views and achieve new state-of-the-art results.
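The token-pruning idea in the PPT abstract above can be sketched generically as follows; the scoring heuristic, shapes, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pruned_self_attention(tokens, human_scores, keep_ratio=0.5):
    """Attend only among the top-scoring (likely-human) tokens.

    tokens: (N, D) patch embeddings; human_scores: (N,) coarse
    foreground scores assumed to come from an earlier layer.
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    keep = np.argsort(human_scores)[-k:]          # indices of kept tokens
    sel = tokens[keep]                            # (k, D)
    attn = softmax(sel @ sel.T / np.sqrt(d))      # attention only within the kept set
    out = tokens.copy()
    out[keep] = attn @ sel                        # pruned tokens pass through unchanged
    return out, keep

# toy example: 6 tokens, 4 dims, half kept
rng = np.random.default_rng(0)
toks = rng.normal(size=(6, 4))
scores = np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.05])
out, keep = pruned_self_attention(toks, scores, keep_ratio=0.5)
```

Because attention is quadratic in the number of tokens, restricting it to the kept set cuts the cost by roughly the square of the keep ratio.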
In addition to local relevance, passage ranking for open-domain factoid question answering also requires a passage to contain an answer (answerability). While some recent studies have incorporated some reading capability into rankers to account for answerability, ranking is still hampered by the noisy nature of the training data typically available in this field, which would consider any passage containing the answer entity as a positive sample. However, an answer entity in a passage is not necessarily related to the given question. To address this problem, we propose an approach for passage re-ranking based on generative adversarial neural networks, called \ttt{PReGAN}, which incorporates a discriminator on answerability in addition to local relevance. The goal is to force the generator to rank higher passages that are both locally relevant and contain the answer. Experiments on five public datasets demonstrate that \ttt{PReGAN} can better rank appropriate passages, thereby improving the effectiveness of QA systems, and outperforming existing methods without using external data.
Diffeomorphic image registration is a crucial task in medical image analysis. Recent learning-based image registration methods utilize convolutional neural networks (CNNs) to learn the spatial transformation between image pairs and achieve fast inference speed. However, these methods often require a large amount of training data to improve their generalization ability. At test time, learning-based methods may fail to provide good registration results, most likely because the model overfits the training dataset. In this paper, we propose a neural representation of continuous velocity fields (NeVF) to describe the deformation between a pair of images. Specifically, the neural velocity field assigns a velocity vector to each point in space, which offers higher flexibility in modeling complex deformation fields. Furthermore, we propose a simple sparse sampling strategy to reduce the memory consumption of diffeomorphic registration. The proposed NeVF can also be combined with a pre-trained learning-based model, whose predicted deformation is taken as the initial state for optimization. Extensive experiments conducted on two large-scale 3D MR brain scan datasets demonstrate that our proposed method outperforms state-of-the-art registration methods.
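As a rough illustration of the velocity-field idea above (not the authors' architecture), a small MLP can assign a velocity to every spatial point, and the deformation is obtained by integrating points along the field, for example with forward-Euler steps:

```python
import numpy as np

def velocity_mlp(points, W1, b1, W2, b2):
    """Tiny MLP mapping each 3D point to a 3D velocity vector."""
    h = np.tanh(points @ W1 + b1)
    return h @ W2 + b2

def integrate(points, params, steps=8, T=1.0):
    """Forward-Euler integration of the velocity field over time T."""
    dt = T / steps
    p = points.copy()
    for _ in range(steps):
        p = p + dt * velocity_mlp(p, *params)
    return p

# sanity check: a constant field (W2 = 0) translates every point by b2 * T
W1 = np.zeros((3, 8)); b1 = np.zeros(8)
W2 = np.zeros((8, 3)); b2 = np.array([1.0, 0.0, -2.0])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
warped = integrate(pts, (W1, b1, W2, b2))
```

Because the field is queried pointwise, only a sparse sample of points needs to be integrated per optimization step, which is the intuition behind the memory-saving sampling strategy mentioned in the abstract.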
Estimating 2D human poses in each view is typically the first step in calibrated multi-view 3D pose estimation. However, the performance of 2D pose detectors suffers from challenging situations such as occlusions and oblique viewing angles. To address these challenges, previous works derive point-to-point correspondences between different views from epipolar geometry and utilize the correspondences to merge prediction heatmaps or feature representations. Instead of post-prediction merging/calibration, we introduce a transformer framework for multi-view 3D pose estimation, aiming to directly improve individual 2D predictions by integrating information from different views. Inspired by previous multi-modal transformers, we design a unified transformer architecture, named TransFusion, which fuses cues from both the current view and neighboring views. Moreover, we propose the concept of the epipolar field to encode 3D positional information into the transformer model. The 3D position encoding guided by the epipolar field provides an efficient way of encoding correspondences between pixels of different views. Experiments on Human 3.6M and Ski-Pose show that our method is more efficient and achieves consistent improvements compared to other fusion methods. Specifically, we achieve 25.8 mm MPJPE on Human 3.6M with only 5M parameters at 256 x 256 resolution.
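The epipolar constraint underlying such cross-view correspondence is textbook two-view geometry (this sketch is not the paper's exact encoding): for a canonical camera pair, the fundamental matrix maps a pixel in one view to a line in the other, and corresponding pixels lie on that line.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# canonical camera pair: P = [I | 0], P' = [R | t]; then F = [t]_x R
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
F = skew(t) @ R

# project a 3D point into both views (homogeneous pixel coordinates)
X = np.array([0.5, -0.2, 3.0])
x1 = X / X[2]                       # first camera
X2 = R @ X + t
x2 = X2 / X2[2]                     # second camera

line = F @ x1                       # epipolar line of x1 in the second view
residual = float(x2 @ line)         # corresponding pixel lies on that line
```

A positional encoding built from such lines lets attention weigh pixel pairs by how consistent they are with the cameras' geometry, rather than treating all cross-view pairs equally.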
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
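One common way to encode 3D points into features that both image and point-cloud tokens can share is a sinusoidal positional encoding of the coordinates; the sketch below is a generic version under assumed shapes, not CMT's actual module.

```python
import numpy as np

def sine_pos_encoding(points, num_freqs=4):
    """Map (N, 3) coordinates to (N, 3 * 2 * num_freqs) sinusoidal features."""
    freqs = 2.0 ** np.arange(num_freqs)          # (F,) octave frequencies
    scaled = points[:, :, None] * freqs          # (N, 3, F)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)  # (N, 3, 2F)
    return enc.reshape(points.shape[0], -1)      # (N, 6F)

pts = np.zeros((5, 3))
feat = sine_pos_encoding(pts)
```

Adding such coordinate features to each modality's tokens gives them a shared spatial reference frame, which is one way an implicit (projection-free) alignment between image and LiDAR tokens can be realized.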
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built based on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain and user tweet features as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
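Ranking features by information gain, as described above, follows the standard entropy-based definition (this sketch is generic, not MGTAB's exact pipeline):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits of a discrete label sequence."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for a discrete feature."""
    total = entropy(labels)
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for y, f in zip(labels, feature) if f == v]
        cond += len(subset) / n * entropy(subset)
    return total - cond

# a perfectly predictive binary feature carries the full label entropy (1 bit here)
y = [0, 0, 1, 1]
ig = information_gain([0, 0, 1, 1], y)
```

Computing this score for every candidate user property and keeping the top k is the usual way such a "greatest information gain" feature set is selected.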
Learning feature interactions is the key to success for large-scale CTR prediction and recommendation. In practice, handcrafted feature engineering usually requires exhaustive searching. In order to reduce the high cost of human efforts in feature engineering, researchers propose several deep neural network (DNN)-based approaches to learn the feature interactions in an end-to-end fashion. However, existing methods either do not learn both vector-wise interactions and bit-wise interactions simultaneously, or fail to combine them in a controllable manner. In this paper, we propose a new model, xDeepInt, based on a novel network architecture called polynomial interaction network (PIN) which learns higher-order vector-wise interactions recursively. By integrating a subspace-crossing mechanism, we enable xDeepInt to balance the mixture of vector-wise and bit-wise feature interactions at a bounded order. Based on the network architecture, we customize a combined optimization strategy to conduct feature selection and interaction selection. We implement the proposed model and evaluate the model performance on three real-world datasets. Our experiment results demonstrate the efficacy and effectiveness of xDeepInt over state-of-the-art models. We open-source the TensorFlow implementation of xDeepInt: https://github.com/yanyachen/xDeepInt.
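One plausible reading of a recursive vector-wise interaction layer is a Hadamard-product recursion with residual connections, so each layer raises the polynomial degree of the interactions by one; the exact recursion and shapes below are assumptions for illustration, not the PIN definition from the paper.

```python
import numpy as np

def polynomial_interaction(x0, weights):
    """Recursively build higher-order vector-wise interactions.

    x0: (num_fields, dim) field embeddings.
    Assumed layer rule: x_l = x_{l-1} * (W_l @ x0) + x_{l-1},
    where * is the elementwise (Hadamard, i.e. vector-wise) product.
    """
    x = x0
    for W in weights:                 # W: (num_fields, num_fields)
        x = x * (W @ x0) + x          # interaction degree grows by one per layer
    return x

rng = np.random.default_rng(1)
x0 = rng.normal(size=(4, 8))
weights = [rng.normal(size=(4, 4)) for _ in range(3)]
out = polynomial_interaction(x0, weights)

# with all-zero weights the residual path leaves x0 unchanged
same = polynomial_interaction(x0, [np.zeros((4, 4))] * 3)
```

The residual term keeps every lower-order interaction available at the output, so the depth of the stack bounds the maximum interaction order without discarding the simpler terms.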
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
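The photometric error used in the second stage is, in general form, the per-pixel difference between the current frame and the previous frame warped by the predicted motion. The sketch below shows a simplified L1 photometric loss for a pure horizontal pixel shift, which is an illustrative stand-in for full reprojection warping with predicted pose and depth.

```python
import numpy as np

def photometric_l1(target, source, shift):
    """L1 photometric error after warping `source` by a horizontal pixel shift.

    A simplified stand-in for warping with predicted ego-motion and depth.
    """
    warped = np.roll(source, shift, axis=1)
    return float(np.abs(target - warped).mean())

# a frame shifted by 3 px is explained perfectly by predicting shift = 3
rng = np.random.default_rng(2)
frame = rng.random((8, 16))
next_frame = np.roll(frame, 3, axis=1)
best = photometric_l1(next_frame, frame, 3)
worse = photometric_l1(next_frame, frame, 0)
```

Minimizing such an error over the predicted motion is what supervises the encoder without any labels: a motion prediction that explains the next frame drives the photometric residual toward zero.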